
    Integrating Prosodic and Lexical Cues for Automatic Topic Segmentation

    We present a probabilistic model that uses both prosodic and lexical cues for the automatic segmentation of speech into topically coherent units. We propose two methods for combining lexical and prosodic information using hidden Markov models and decision trees. Lexical information is obtained from a speech recognizer, and prosodic features are extracted automatically from speech waveforms. We evaluate our approach on the Broadcast News corpus, using the DARPA-TDT evaluation metrics. Results show that the prosodic model alone is competitive with word-based segmentation methods. Furthermore, we achieve a significant reduction in error by combining the prosodic and word-based knowledge sources. Comment: 27 pages, 8 figures.
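The combination of knowledge sources described above can be illustrated with a minimal sketch. This is not the paper's model: the log-linear interpolation, the weight `alpha`, and the threshold are illustrative assumptions standing in for the HMM/decision-tree combination.

```python
# Hedged sketch: combining per-position topic-boundary probabilities from a
# lexical source and a prosodic source by log-linear interpolation.
# `alpha` and all example values are assumptions, not the paper's settings.
import math

def combine_boundary_scores(lexical_probs, prosodic_probs, alpha=0.5):
    """Weighted log-linear combination of two boundary-probability streams."""
    combined = []
    for p_lex, p_pro in zip(lexical_probs, prosodic_probs):
        log_p = alpha * math.log(p_lex) + (1 - alpha) * math.log(p_pro)
        combined.append(math.exp(log_p))
    return combined

def segment(combined, threshold=0.5):
    """Hypothesise a topic boundary wherever the combined score is high."""
    return [i for i, p in enumerate(combined) if p > threshold]
```

With equal weights this reduces to the geometric mean of the two sources, so a boundary must be supported by both cues to survive the threshold.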

    Strategies for distributing goals in a team of cooperative agents

    This paper addresses the problem of distributing goals to individual agents inside a team of cooperative agents. It shows that several parameters determine the goals of particular agents. The first parameter is the set of goals allocated to the team; the second parameter is the description of the actual world; the third parameter is the description of the agents' abilities and commitments. The last parameter is the strategy the team agrees on: for each precise goal, the team may define several strategies, which are orders between agents representing, for instance, their relative competence or their relative cost. This paper also shows how to combine strategies. The method used here assumes an order of priority between strategies.
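The idea of strategies as orders between agents, combined under a priority order, can be sketched concretely. This is a minimal illustration under assumed names, not the paper's formalism: each strategy is represented as a rank per agent, and higher-priority strategies dominate while lower-priority ones only break ties.

```python
# Hedged sketch: combining agent-ordering strategies by priority.
# Each strategy maps agent -> rank (lower is better); the strategy list
# is given in decreasing priority. All names here are illustrative.

def combine_strategies(agents, strategies):
    """Lexicographic combination: sort agents by their rank tuple, so the
    first (highest-priority) strategy dominates and later ones break ties."""
    return sorted(agents, key=lambda a: tuple(s[a] for s in strategies))

def assign_goal(goal, agents, capable, strategies):
    """Assign `goal` to the best-ranked agent that is able to achieve it."""
    for agent in combine_strategies(agents, strategies):
        if capable(agent, goal):
            return agent
    return None  # no capable agent in the team
```

A competence order combined with a cost order, in that priority, picks the cheapest among the most competent capable agents.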

    Prosody Modelling in Concept-to-Speech Generation: Methodological Issues

    We explore three issues for the development of concept-to-speech (CTS) systems. We identify information available in a language-generation system that has the potential to impact prosody; investigate the role played by different corpora in CTS prosody modelling; and explore different methodologies for learning how linguistic features impact prosody. Our major focus is on the comparison of two machine learning methodologies: generalized rule induction and memory-based learning. We describe this work in the context of multimedia abstract generation of intensive care (MAGIC) data, a system that produces multimedia briefings on the status of patients who have just undergone a bypass operation.
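Of the two methodologies compared above, memory-based learning has a particularly compact core: store all training examples and classify by the nearest ones. The sketch below is a generic k-nearest-neighbour classifier under invented features and labels; the paper's actual feature set comes from the language-generation system, not from this example.

```python
# Hedged sketch of memory-based learning: k-nearest-neighbour prediction
# of a prosodic label from symbolic features. Features/labels are invented
# for illustration only.
from collections import Counter

def knn_predict(memory, features, k=3):
    """memory: list of (feature_vector, label) pairs. Predict the majority
    label among the k stored examples closest in Hamming distance."""
    def dist(a, b):
        return sum(x != y for x, y in zip(a, b))
    nearest = sorted(memory, key=lambda ex: dist(ex[0], features))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```

Unlike rule induction, nothing is abstracted at training time; all generalization happens at prediction time through the distance metric.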

    Validating soil denitrification models based on laboratory N2 and N2O fluxes and underlying processes derived by stable isotope approaches: concept, methods and regulation of measured fluxes

    Robust denitrification data suitable to validate soil N2 fluxes in denitrification models are scarce due to methodological limitations and the extreme spatio-temporal heterogeneity of denitrification in soils. Numerical models have become essential tools to predict denitrification at different scales. Model performance could be tested for total gaseous flux (NO + N2O + N2), for individual denitrification products (e.g. N2O and/or NO), or for the effect of denitrification factors (e.g. C-availability, respiration, diffusivity, anaerobic volume, etc.). While there are numerous examples of validating N2O fluxes, there are neither robust field data of N2 fluxes nor sufficiently resolved measurements of the control factors used as state variables in the models. Here we present the concept, methods and first results of collecting model validation data. This is part of the coordinated research unit “Denitrification in Agricultural Soils: Integrated Control and Modelling at Various Scales” (DASIM). Novel approaches are used, including analysis of stable isotopes, microbial communities, pore structure and organic matter fractions, to provide denitrification data sets comprising as much detail on activity and regulation as possible. This will be the basis to validate existing and calibrate new denitrification models that are applied and/or developed by DASIM subprojects. To allow model testing in a wide range of conditions, denitrification control factors are varied in the initial settings (pore volume, plant residues, mineral N, pH) but also over time, where moisture, temperature and mineral N are manipulated according to typical time patterns in the field. This is realized by including precipitation events, fertilization (via irrigation), drainage (via water potential) and temperature changes in the course of incubations. Moreover, oxygen concentration is varied to simulate anaerobic events. The 15N gas flux method is employed to quantify N2 and N2O emissions from various pools and processes.

    Integrating Collaboration and Activity-Oriented Planning for Coalition Operations Support

    The University of Edinburgh and research sponsors are authorised to reproduce and distribute reprints and on-line copies for their purposes notwithstanding any copyright annotation hereon. The views and conclusions contained herein are the author's and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of other parties. The use of planning assistant agents is an appropriate option to provide support for members of a coalition. Planning agents can extend human abilities and be customised to attend to different kinds of activities. However, the implementation of a planning framework must also consider other important requirements for coalitions, such as the performance of collaborative activities and human-agent interaction (HAI). This paper discusses the integration of activity-oriented planning with collaborative concepts, using a constraint-based ontology for that purpose. While the use of collaborative concepts provides better performance for the system as a whole, a unified representation of planning and collaboration enables easy customisation of activity handlers and provides the basis for a future incorporation of HAI mechanisms.
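The unified, constraint-based activity representation the abstract alludes to can be sketched minimally. The field names and the flat dictionary encoding below are illustrative assumptions, not the paper's ontology: an activity carries constraints that must hold in the current world state, and executability is checked recursively over subactivities.

```python
# Hedged sketch of a constraint-based activity representation.
# Field names are illustrative; constraints are predicates over a
# world-state dictionary.

def make_activity(name, performer=None, constraints=None):
    return {"name": name, "performer": performer,
            "constraints": list(constraints or []), "subactivities": []}

def satisfiable(activity, world):
    """An activity is executable when every constraint holds in `world`
    and all of its subactivities are executable too."""
    return (all(c(world) for c in activity["constraints"])
            and all(satisfiable(s, world) for s in activity["subactivities"]))
```

Keeping collaboration requirements in the same constraint list as planning preconditions is one way to get the unified representation the paper argues for.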

    Validating soil denitrification models based on laboratory N2 and N2O fluxes and underlying processes: evaluation of DailyDayCent and COUP models

    Denitrification is a key anaerobic microbial process in which NO3- is reduced stepwise and emitted from the soil as NO, N2O and finally N2. Accurate knowledge of denitrification dynamics is important because N2O is further reduced to N2 by this process, which constitutes the main emission source of this greenhouse gas from agricultural soils. Hence, our understanding and ability to quantify soil denitrification are crucial for mitigating nitrogen fertilizer loss as well as for reducing N2O emissions. Models can be an important tool to predict mitigation effects and help to develop climate-smart mitigation strategies. Ideally, commonly used biogeochemical models would provide adequate predictions of denitrification processes in agricultural soils, but simplified process descriptions and inadequate model parameters often prevent models from simulating adequate fluxes of N2 and N2O at the field scale. Model development and parametrization often suffer from the limited availability of empirical data describing denitrification processes in agricultural soils. While many studies use N2O emissions to develop and train models, detailed measurements of NO, N2O and N2 fluxes and concentrations, and of the related soil conditions, are necessary to develop and test adequate model algorithms. To address this issue, the coordinated research unit “Denitrification in Agricultural Soils: Integrated Control and Modelling at Various Scales” (DASIM) was initiated to investigate more closely the N fluxes caused by denitrification in response to environmental effects, soil properties and microbial communities. Here, we present how we will use these data to evaluate common biogeochemical process models (DailyDayCent, COUP) with respect to modelled NO, N2O and N2 fluxes from denitrification. The models are used with different settings. The first approximation is the basic “factory” setting of the models. The next step is to assess the precision of the modelling results after adjusting the appropriate parameters based on the measured values and the “factory” results. Better parameter adjustment, together with well-controlled measured input and output parameters, should provide a better understanding of the probable shortcomings of the tested models, which will be a basis for future model improvement.
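The stepwise reduction chain that these models resolve can be illustrated with a toy first-order kinetics scheme. This is emphatically not DailyDayCent or COUP: the rate constants, the omission of NO, and the explicit-Euler integration are all simplifying assumptions, shown only to make the NO3- → N2O → N2 pathway concrete.

```python
# Hedged toy model of the reduction chain NO3- -> N2O -> N2 (NO omitted
# for brevity). First-order rate constants k1, k2 and the time step are
# illustrative, not parameters of any real biogeochemical model.

def denitrify(no3, n2o, k1, k2, dt, steps):
    """Explicit-Euler integration of
       d[NO3]/dt = -k1*[NO3]
       d[N2O]/dt =  k1*[NO3] - k2*[N2O]
    returning (NO3, N2O, cumulative N2). N mass is conserved by design."""
    n2 = 0.0
    for _ in range(steps):
        r1 = k1 * no3   # NO3- reduction rate
        r2 = k2 * n2o   # N2O reduction rate
        no3 -= r1 * dt
        n2o += (r1 - r2) * dt
        n2 += r2 * dt
    return no3, n2o, n2
```

The ratio of emitted N2O to N2 in such a scheme depends entirely on k2 relative to k1, which is exactly the kind of parameter the measured flux data are meant to constrain.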

    Probabilistic approaches for modeling text structure and their application to text-to-text generation

    Since the early days of generation research, it has been acknowledged that modeling the global structure of a document is crucial for producing coherent, readable output. However, traditional knowledge-intensive approaches have been of limited utility in addressing this problem since they cannot be effectively scaled to operate in domain-independent, large-scale applications. Due to this difficulty, existing text-to-text generation systems rarely rely on such structural information when producing an output text. Consequently, texts generated by these methods do not match the quality of those written by humans – they are often fraught with severe coherence violations and disfluencies. In this chapter, I will present probabilistic models of document structure that can be effectively learned from raw document collections. This feature distinguishes these new models from traditional knowledge-intensive approaches used in symbolic concept-to-text generation. Our results demonstrate that these probabilistic models can be directly applied to content organization, and suggest that these models can prove useful in an even broader range of text-to-text applications than we have considered here. (National Science Foundation (U.S.), CAREER grant IIS-0448168; Microsoft Research New Faculty Fellowship.)

    A computational approach to implicit entities and events in text and discourse

    In this paper we focus on the notion of “implicit” or lexically unexpressed linguistic elements that are nonetheless necessary for a complete semantic interpretation of a text. We refer to “entities” and “events” because the recovery of the implicit material may affect all the modules of a system for semantic processing, from the grammatically guided components to the inferential and reasoning ones. Reference to the system GETARUNS offers one possible implementation of the algorithms and procedures needed to cope with the problem and enables us to deal with the whole spectrum of phenomena. The paper first addresses the following three types of “implicit” entities and events: the grammatical ones, as suggested by linguistic theories like LFG or similar generative theories; the semantic ones suggested in the FrameNet project, i.e. CNI, DNI, INI; and the pragmatic ones, for which we present a theory and an implementation for the recovery of implicit entities and events of (non-)standard implicatures. In particular, we show how the use of commonsense knowledge may fruitfully contribute to finding relevant implied meanings. The last implicit entity, only touched on for lack of space, is the Subject of Point of View, which is computed by Semantic Informational Structure and contributes the intended entity from whose point of view a given subjective statement is expressed.

    Situation based strategic positioning for coordinating a team of homogeneous agents

    In this paper we are proposing an approach for coordinating a team of homogeneous agents based on a flexible common Team Strategy as well as on the concepts of Situation Based Strategic Positioning and Dynamic Positioning and Role Exchange. We also introduce an Agent Architecture including a specific high-level decision module capable of implementing this strategy. Our proposal is based on the formalization of what is a team strategy for competing with an opponent team having opposite goals. A team strategy is composed of a set of agent types and a set of tactics, which are in turn composed of several formations. Formations are used for different situations and assign each agent a default spatial positioning and an agent type (defining its behaviour at several levels). Agent reactivity is also introduced for appropriate response to the dynamics of the current situation. However, in our approach this is done in a way that preserves team coherence instead of permitting uncoordinated agent behaviour. We have applied, with success, this coordination approach to the RoboSoccer simulated domain. The FC Portugal team, developed using this approach, won the RoboCup2000 (simulation league) European and World championships, scoring a total of 180 goals and conceding none.
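One common way to realize situation-based strategic positioning, sketched below under assumed names and weights, is to let each formation give an agent a default (home) position that is attracted toward the ball. The attraction coefficients here are illustrative, not FC Portugal's tuned values.

```python
# Hedged sketch: a formation assigns each agent a home position and
# per-axis ball-attraction weights; the strategic position interpolates
# between home and the ball. All numbers are illustrative.
from dataclasses import dataclass

@dataclass
class Role:
    home_x: float
    home_y: float
    ball_attraction_x: float  # 0.0 = ignore ball, 1.0 = follow ball exactly
    ball_attraction_y: float

def strategic_position(role, ball_x, ball_y):
    """Default formation position shifted toward the current ball position."""
    x = role.home_x + role.ball_attraction_x * (ball_x - role.home_x)
    y = role.home_y + role.ball_attraction_y * (ball_y - role.home_y)
    return x, y
```

Because every agent derives its target from the same shared formation and ball state, the team keeps a coherent shape instead of all agents chasing the ball.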